'Godfather of AI' Quits Google, Warns of Serious Technology Dangers
2023-05-04
A man widely considered the "godfather" of artificial intelligence (AI) says he quit his job at Google to speak freely about the dangers of the technology.
Geoffrey Hinton recently spoke to The New York Times and other press about his experiences at Google, and his wider concerns about AI development.
He told the Times he left the search engine company last month after leading the Google Research team in Toronto, Canada, for 10 years.
During his career, the 75-year-old Hinton has pioneered work on deep learning and neural networks.
A neural network is a computer processing system built to act like the human brain.
Hinton's work helped form the base for much of the AI technology in use today.
In 2019, Hinton and two other computer scientists received the Turing Award for their separate work related to neural networks.
The award has been described as the "Nobel Prize of Computing."
The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.
In recent months, a number of new AI technologies have been introduced.
Microsoft-backed American startup OpenAI launched its latest AI model, GPT-4, in March.
Other technology companies have invested in similar computing tools, including Google's Bard system. Such tools are known as "chatbots."
The recently released AI tools have demonstrated the ability to carry out human-like discussions and create complex documents based on short, written commands.
Speaking to the BBC, Hinton called the dangers of such tools "quite scary."
He added, "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon will be."
He said he believes AI systems are getting smarter because of the massive amounts of data they take in and examine.
Hinton also told MIT Technology Review he fears some "bad" individuals might use AI in ways that could seriously harm society.
Such effects could include AI systems interfering in elections or inciting violence.
He told the Times he thinks AI systems could create a world in which people will "not be able to know what is true anymore."
Hinton said he retired from Google so that he could speak openly about the possible risks of the technology as someone who no longer works for the company.
"I want to talk about AI safety issues without having to worry about how it interacts with Google's business," he told MIT Technology Review.
Since announcing his departure, Hinton has said he thinks Google has "acted very responsibly" in its own AI development.
In March, hundreds of AI experts and industry leaders released an open letter expressing deep concerns about current AI development efforts.
The letter identified a number of harms that could result from such development.
These included increases in propaganda and misinformation, the loss of millions of jobs to machines and the possibility that AI could one day take control of our civilization.
The letter urged a halt to development of some kinds of AI.
Turing Award winner Bengio, Apple co-founder Steve Wozniak and Elon Musk, leader of SpaceX, Tesla and Twitter, signed the letter.
The organization that released the letter, the Future of Life Institute, is financially supported by the Musk Foundation.
Musk has long warned of the possible dangers of AI.
Last month, he told Fox News he planned to create his own version of some AI tools released in recent months.
Musk said his new AI tool would be called TruthGPT.
He described it as "truth-seeking AI" that will seek to understand humanity so it is less likely to destroy it.
Alondra Nelson is the former head of the White House Office of Science and Technology Policy, which seeks to create guidelines for the responsible use of AI tools.
She told The Associated Press, "For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn't only include AI experts and developers."
Nelson added that she hopes the recent attention on AI can create "a new conversation about what we want a democratic future and a non-exploitative future with technology to look like."
I'm Bryan Lynn.
Bryan Lynn wrote this story for VOA Learning English, based on reports from The Associated Press, Reuters and Agence France-Presse.

_______________________________________________________________

Words in This Story

artificial intelligence - n. an area of computer science that deals with giving machines the ability to seem like they have human intelligence

pioneer - v. to be one of the first people to do something

scary - adj. frightening; causing fear

conversation - n. the act of speaking to another person

exploitative - adj. unfairly or cynically using another person or group for profit or advantage